
    Urban Air Mobility System Testbed Using CAVE Virtual Reality Environment

    Urban Air Mobility (UAM) refers to a system of air passenger and small-cargo transportation within an urban area. The UAM framework also includes other urban Unmanned Aerial Systems (UAS) services that will be supported by a mix of onboard, ground, piloted, and autonomous operations. Over the past few years, UAM research has gained wide interest from companies and federal agencies as an on-demand, innovative transportation option that can help reduce traffic congestion and pollution as well as increase mobility in metropolitan areas. The concepts of UAM/UAS operation in the National Airspace System (NAS) remain an active area of research to ensure safe and efficient operations. With new developments in smart vehicle design and infrastructure for air traffic management, there is a need for methods to integrate and test the various components of the UAM framework. In this work, we report on the development of a virtual reality (VR) testbed using Cave Automatic Virtual Environment (CAVE) technology for human-automation teaming and airspace operation research in UAM. Using a four-wall projection system with motion capture, the CAVE provides an immersive virtual environment with real-time full-body tracking. We created a virtual environment consisting of the city of San Francisco and a vertical take-off and landing passenger aircraft that can fly between a downtown location and San Francisco International Airport. The aircraft can be operated autonomously or manually by a single pilot who maneuvers the aircraft using a flight control joystick. The interior of the aircraft includes a virtual cockpit display with vehicle heading, location, and speed information. The system can record simulation events and flight data for post-processing, and its parameters are customizable for different flight scenarios; the CAVE VR testbed thus provides a flexible method for developing and evaluating the UAM framework.
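    A minimal sketch of how such customizable scenario parameters and event logging might look in practice (the class and field names below are hypothetical illustrations, not the testbed's actual interface):

```python
from dataclasses import dataclass, field
import json, time

@dataclass
class FlightScenario:
    # Hypothetical scenario parameters; the testbed's actual schema is not published here.
    origin: str = "Downtown San Francisco"
    destination: str = "San Francisco International Airport"
    control_mode: str = "autonomous"   # or "manual" (single pilot with joystick)
    cruise_speed_mps: float = 60.0
    events: list = field(default_factory=list)

    def log_event(self, name: str, **data) -> None:
        # Record timestamped simulation events for post-processing.
        self.events.append({"t": time.time(), "event": name, **data})

scenario = FlightScenario(control_mode="manual")
scenario.log_event("takeoff", heading_deg=270.0, speed_mps=0.0)
scenario.log_event("waypoint_reached", waypoint="SFO approach")
print(json.dumps(scenario.events, indent=2))
```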

    Effects of Force Feedback and Distractor Location on a CDTI Target Selection Task

    New flight deck technologies need to be implemented in order to support projected rises in traffic levels. Future cockpit displays of traffic information (CDTIs) must accommodate the altered responsibilities of pilots by facilitating more efficient routes and minimizing conflicts. However, the unstable nature of the cockpit may present challenges when precise inputs are required. The present study investigated the effects of force feedback and distractors on point-and-click movement times in a CDTI environment. Participants performed target selection tasks with multiple levels of force feedback and distractor location. Results implied that force feedback failed to benefit movement times relative to the standard computer mouse. However, substantial interactions among distractor effects, force levels, and other target characteristics are explored.
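    Point-and-click movement times of this kind are conventionally modeled with Fitts' law; the sketch below shows how a nearby distractor that effectively narrows the usable target width raises the predicted movement time (the regression coefficients are illustrative placeholders, not values fitted from this study):

```python
import math

def fitts_movement_time(distance: float, width: float,
                        a: float = 0.2, b: float = 0.15) -> float:
    """Predicted movement time (s) under Fitts' law, Shannon formulation.

    a and b are illustrative regression coefficients, not values
    fitted from the study reported above.
    """
    index_of_difficulty = math.log2(distance / width + 1.0)  # bits
    return a + b * index_of_difficulty

# A distractor adjacent to the target effectively shrinks the usable
# width, raising the index of difficulty and the predicted time.
print(fitts_movement_time(distance=300.0, width=40.0))  # unobstructed target
print(fitts_movement_time(distance=300.0, width=20.0))  # width halved by a distractor
```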

    Effects of Type and Strength of Force Feedback on Movement Time in a Target Selection Task

    Future cockpits will likely include new onboard technologies, such as cockpit displays of traffic information, to help support future flight deck roles and responsibilities. These new technologies may benefit from multimodal feedback to aid pilot information processing. The current study investigated the effects of multiple levels of force feedback on operator performance in an aviation task. Participants were presented with two different types of force feedback (gravitational and spring force feedback) for a discrete targeting task, with multiple levels of gain examined for each force feedback type. Approach time and time in target were recorded. Results suggested that the two highest levels of gravitational force significantly reduced approach times relative to the lowest level of gravitational force. Spring force level affected only time in target. Implications of these findings for the design of future cockpit displays are discussed.
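    A minimal sketch of the two force feedback types as they are commonly implemented (the gains and radii below are illustrative, not the levels tested in the study):

```python
import numpy as np

def gravitational_force(cursor: np.ndarray, target: np.ndarray,
                        gain: float, well_radius: float) -> np.ndarray:
    """Attractive force pulling the cursor toward the target once it
    enters the gravity well; zero outside. The gain corresponds to the
    strength manipulation in the study (values here are illustrative)."""
    offset = target - cursor
    dist = np.linalg.norm(offset)
    if dist == 0.0 or dist > well_radius:
        return np.zeros_like(cursor)
    return gain * offset / dist  # constant-magnitude pull toward the target

def spring_force(cursor: np.ndarray, target: np.ndarray,
                 stiffness: float) -> np.ndarray:
    """Spring force proportional to displacement from the target center,
    which holds the cursor in place once the target is acquired."""
    return stiffness * (target - cursor)

cursor, target = np.array([0.0, 0.0]), np.array([5.0, 0.0])
print(gravitational_force(cursor, target, gain=2.0, well_radius=10.0))
print(spring_force(cursor, target, stiffness=0.5))
```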

    Motion Control Methods for Human-Machine Cooperative Systems

    An approach to motion guidance called virtual fixtures is applied to admittance-controlled human-machine cooperative systems, which are designed to help a human operator perform tasks that require precision and accuracy near human physical limits. Virtual fixtures create guidance either by keeping the robot out of restricted regions (forbidden-region virtual fixtures) or by influencing its movement along desired paths (guidance virtual fixtures). An implementation framework for vision-based virtual fixtures with an adjustable guidance level was developed for applications in ophthalmic surgery. Virtual fixtures were defined intraoperatively using a real-time workspace reconstruction obtained from a vision system and were implemented on a scaled-up retinal vein cannulation testbed. Two human-factors studies were performed to address design considerations of such a human-machine system. The first study demonstrated that cooperative manipulation offers superior accuracy to telemanipulation in a Fitts’ Law targeting task, while the two are comparable in task execution time. The second study showed that a high level of guidance improves performance in path-following tasks but worsens performance on tasks that require off-path motion. Gain selection criteria were developed to determine an appropriate guidance level. Control methods were also developed to improve virtual fixture performance in the presence of robot compliance and human involuntary motion. To obtain an accurate estimate of end-effector location, positions obtained at discrete intervals from cameras are fused with a robot dynamic model using a Kalman filter. Considering both robot compliance and hand dynamics, the control methods effectively achieve the desired end-effector position under forbidden-region virtual fixtures and the desired velocity under guidance virtual fixtures. An experiment on a one-degree-of-freedom compliant human-machine system demonstrates the efficacy of the proposed controllers. A compliance model of the JHU Eye Robot was developed to enable controller implementation on a higher-degree-of-freedom human-machine cooperative system. The presented research provides key insights for virtual fixture design and implementation, particularly for fine manipulation tasks.
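    The adjustable guidance level described here is commonly realized as an anisotropic admittance that attenuates the force component orthogonal to the preferred direction; a minimal sketch under that assumption (the gains are illustrative, not the thesis's tuned values):

```python
import numpy as np

def guidance_admittance_velocity(force: np.ndarray, preferred_dir: np.ndarray,
                                 c: float = 0.05, k_tau: float = 0.2) -> np.ndarray:
    """Admittance-type guidance virtual fixture (a sketch of the standard
    formulation, offered for illustration).

    The user's applied force is split into a component along the preferred
    direction and an orthogonal component. The orthogonal component is
    attenuated by k_tau in [0, 1]: k_tau = 0 gives hard guidance (motion
    only along the path), k_tau = 1 recovers an unguided admittance.
    """
    d = preferred_dir / np.linalg.norm(preferred_dir)
    f_along = np.dot(force, d) * d         # force component along the path tangent
    f_orth = force - f_along               # component pushing off the path
    return c * (f_along + k_tau * f_orth)  # commanded end-effector velocity

f = np.array([1.0, 0.5, 0.0])              # user-applied force (N)
tangent = np.array([1.0, 0.0, 0.0])        # desired path direction
print(guidance_admittance_velocity(f, tangent, k_tau=0.2))
```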

    Virtual Fixture Control for Compliant Human-Machine Interfaces

    In human-machine collaborative systems, robot joint compliance and human-input dynamics lead to involuntary tool motion into undesired regions. To correct this, a set of methods called Dynamically-Defined Virtual Fixtures was previously proposed to create a movable virtual fixture that stops the user at a safe distance outside the forbidden region. In this work, a new method, called the Force-Based Method, was added, and a vision system was introduced for real-time tool tracking. Additionally, we implemented a closed-loop controller with the virtual fixtures that allows the user to reach, but not enter, the forbidden region. Two user experiments were conducted on a 1-DOF testbed to evaluate the virtual fixture methods. The first experiment showed the effectiveness of the virtual fixtures in preventing penetration; however, the absence of haptic feedback in the closed-loop implementation resulted in boundary penetration. In the second experiment, visual feedback was used to compensate for the lack of haptic feedback, and user cognitive load was added as an inhibiting factor in a human-machine cooperative setting. The experiment showed a significant reduction in penetration with visual feedback, while the addition of cognitive load did not significantly increase penetration.
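    The movable-boundary idea can be illustrated with stopping-distance reasoning: engage the fixture far enough from the forbidden region that the tool can decelerate to rest outside it. A hypothetical 1-DOF sketch, not the paper's exact Force-Based Method:

```python
def fixture_boundary(forbidden_edge: float, velocity: float,
                     max_decel: float, safety_margin: float) -> float:
    """Position (1-DOF) at which a dynamically-defined virtual fixture
    should engage so the tool stops outside the forbidden region.

    The boundary is moved out from the forbidden edge by the braking
    distance v^2 / (2a) plus a fixed safety margin.
    """
    braking = velocity ** 2 / (2.0 * max_decel) if velocity > 0 else 0.0
    return forbidden_edge - safety_margin - braking

def commanded_velocity(position: float, velocity: float, edge: float,
                       max_decel: float, margin: float) -> float:
    # Zero the admittance (stop the tool) once it reaches the moving boundary.
    if position >= fixture_boundary(edge, velocity, max_decel, margin):
        return 0.0
    return velocity

print(fixture_boundary(forbidden_edge=10.0, velocity=0.02,
                       max_decel=0.5, safety_margin=0.5))
print(commanded_velocity(position=9.6, velocity=0.02, edge=10.0,
                         max_decel=0.5, margin=0.5))  # stopped at the boundary
```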

    Haptic Virtual Fixtures for Robot-Assisted Manipulation

    Haptic virtual fixtures are software-generated force and position signals applied to human operators in order to improve the safety, accuracy, and speed of robot-assisted manipulation tasks. Virtual fixtures are effective and intuitive because they capitalize on both the accuracy of robotic systems and the intelligence of human operators. In this paper, we present the design, analysis, and implementation of two categories of virtual fixtures: guidance virtual fixtures, which assist the user in moving the manipulator along desired paths or surfaces in the workspace, and forbidden-region virtual fixtures, which prevent the manipulator from entering forbidden regions of the workspace. Virtual fixtures are analyzed in the context of both cooperative manipulation and telemanipulation systems, considering issues related to stability, passivity, human modeling, and applications.
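    As one concrete instance of such a software-generated force signal, a penalty-based rendering of a planar forbidden-region fixture can be sketched as follows (a common textbook formulation offered for illustration; the paper's stability and passivity analysis applies to laws of this kind rather than to this exact code):

```python
import numpy as np

def forbidden_region_force(tool: np.ndarray, plane_point: np.ndarray,
                           plane_normal: np.ndarray, stiffness: float) -> np.ndarray:
    """Penalty-based planar forbidden-region virtual fixture.

    If the tool penetrates the plane, push it back along the surface
    normal with a force proportional to the penetration depth.
    """
    n = plane_normal / np.linalg.norm(plane_normal)
    penetration = np.dot(plane_point - tool, n)   # > 0 means the tool is inside
    if penetration <= 0.0:
        return np.zeros_like(tool)
    return stiffness * penetration * n

tool = np.array([0.0, -0.002, 0.0])               # 2 mm past the boundary
print(forbidden_region_force(tool, np.zeros(3),
                             np.array([0.0, 1.0, 0.0]), stiffness=2000.0))
```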

    Vision Assisted Control for Manipulation Using Virtual Fixtures

    We present the design and implementation of a vision-based system for cooperative manipulation at millimeter to micrometer scales. The system is based on an admittance control algorithm that implements a broad class of guidance modes called virtual fixtures. A virtual fixture, like a real fixture, limits the motion of a tool to a prescribed class or range of motions. We describe how both hard (unyielding) and soft (yielding) virtual fixtures can be implemented in this control framework. We then detail the construction of virtual fixtures for point positioning and curve following, as well as extensions of these to tubes, cones, and sequences thereof. We also describe an implemented system using the JHU Steady Hand Robot. The system uses computer vision as a sensor to provide a reference trajectory, and the virtual fixture control algorithm then provides haptic feedback to implement direct, shared manipulation. We provide extensive experimental results detailing both system performance and the effects of virtual fixtures on human speed and accuracy.
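    For curve following, the vision-derived reference trajectory must be reduced to a preferred direction at the tool's current location; a minimal sketch, assuming the trajectory is delivered as a polyline of waypoints (hypothetical code, not the JHU system's implementation):

```python
import numpy as np

def closest_segment_tangent(tool: np.ndarray, curve: np.ndarray):
    """Given a reference trajectory (a polyline of waypoints from the
    vision system), find the closest point on the curve and its local
    tangent. The tangent is the preferred direction fed to the virtual
    fixture controller.
    """
    best = (np.inf, None, None)
    for p0, p1 in zip(curve[:-1], curve[1:]):
        seg = p1 - p0
        s = np.clip(np.dot(tool - p0, seg) / np.dot(seg, seg), 0.0, 1.0)
        proj = p0 + s * seg                       # closest point on this segment
        d = np.linalg.norm(tool - proj)
        if d < best[0]:
            best = (d, proj, seg / np.linalg.norm(seg))
    return best                                   # (distance, closest point, tangent)

curve = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 1.0]])
dist, point, tangent = closest_segment_tangent(np.array([1.2, 0.4]), curve)
print(dist, point, tangent)
```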